Learning over time using a neuromorphic adaptive control algorithm for robotic arms
In this paper, we explore the ability of a robot arm to learn the underlying operational space defined by the positions (x, y, z) that the arm's end-effector can reach, including under disturbances, by deploying and thoroughly evaluating a Spiking Neural Network (SNN)-based adaptive control algorithm. While traditional robotic control algorithms have limitations in adapting to both new and dynamic environments, we show that the robot arm can learn its operational space and complete tasks faster over time. We also demonstrate that the SNN-based adaptive control algorithm enables a fast response while maintaining energy efficiency. We obtained these results by performing an extensive search of the adaptive algorithm's parameter space and by evaluating performance for different SNN sizes, learning rates, dynamic robot arm trajectories, and response times. We show that the robot arm learns to complete tasks 15% faster in specific experimental scenarios, such as those with six or nine random target points.
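The abstract does not detail the learning rule; neuromorphic adaptive controllers of this kind are often built around an error-driven decoder update in the style of the PES rule used in Nengo-based arm control. A minimal NumPy sketch under that assumption, with a rectified-linear rate model standing in for the spiking population (all names and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dims = 1000, 3     # population size; (x, y, z) operational space
lr = 1e-4                     # learning rate (kappa)

# Fixed random encoders; a rectified-linear rate model stands in for spikes.
encoders = rng.normal(size=(n_neurons, dims))
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)
decoders = np.zeros((dims, n_neurons))   # learned output weights

def activities(state):
    """Proxy for population spiking activity at a given arm state."""
    return np.maximum(0.0, encoders @ state)

# Toy loop: learn to cancel a constant disturbance acting on the arm.
disturbance = np.array([0.5, -0.2, 0.1])
state = np.array([0.3, 0.1, 0.4])
for _ in range(2000):
    a = activities(state)
    error = decoders @ a - disturbance       # signed error to be driven to 0
    decoders -= lr * np.outer(error, a)      # PES-style decoder update
print(decoders @ activities(state))          # approaches the disturbance
```

The same error-driven update, run continuously on hardware, is what lets such a controller keep adapting as the arm's dynamics change.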
Neuromorphic Visual Odometry with Resonator Networks
Autonomous agents require self-localization to navigate in unknown environments. They can use Visual Odometry (VO) to estimate self-motion and localize themselves using visual sensors. Unlike inertial sensors, which suffer from drift, and wheel encoders, which suffer from slippage, this motion-estimation strategy is compromised by neither. However, VO with conventional cameras is computationally demanding, limiting its application in systems with strict low-latency, low-memory, and low-energy requirements. Using event-based cameras and neuromorphic computing hardware offers a promising low-power solution to the VO problem. However, conventional VO algorithms are not readily convertible to neuromorphic hardware. In this work, we present a VO algorithm built entirely of neuronal building blocks suitable for neuromorphic implementation. The building blocks are groups of neurons representing vectors in the computational framework of Vector Symbolic Architecture (VSA), which was proposed as an abstraction layer for programming neuromorphic hardware. The proposed VO network generates and stores a working memory of the presented visual environment. It updates this working memory while simultaneously estimating the changing location and orientation of the camera. We demonstrate how VSA can be leveraged as a computing paradigm for neuromorphic robotics. Moreover, our results represent an important step towards using neuromorphic computing hardware for fast and power-efficient VO and the related task of simultaneous localization and mapping (SLAM). We validate this approach experimentally in a simple robotic task and with an event-based dataset, demonstrating state-of-the-art performance in these settings.
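The vector representation is not spelled out above; VSA-based localization work commonly encodes 2D position by fractional power encoding of complex phasor vectors, with binding as elementwise multiplication. A minimal sketch under those assumptions (dimensionality and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1024  # vector dimensionality

def random_phasor(d, rng):
    """Unit-modulus complex base vector (Fourier HRR style)."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, size=d))

X, Y = random_phasor(d, rng), random_phasor(d, rng)

def encode(x, y):
    """Fractional power encoding of a 2D location: X**x * Y**y."""
    return (X ** x) * (Y ** y)

def bind(a, b):      # binding = elementwise complex multiplication
    return a * b

def similarity(a, b):
    return np.real(np.vdot(a, b)) / d

# A camera shift by (dx, dy) acts on the encoding by binding:
p = encode(2.0, -1.0)
p_moved = bind(p, encode(0.5, 0.3))            # translate the represented point
print(similarity(p_moved, encode(2.5, -0.7)))  # ~1.0: consistent with the shift
print(similarity(p_moved, encode(2.0, -1.0)))  # near 0: moved away
```

Because translation becomes a single binding operation, the self-motion estimate and the working-memory update can both be expressed with the same neuronal primitive.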
Neuromorphic Visual Scene Understanding with Resonator Networks
Inferring the position of objects and their rigid transformations is still an open problem in visual scene understanding. Here we propose a neuromorphic solution that utilizes an efficient factorization network based on three key concepts: (1) a computational framework based on Vector Symbolic Architectures (VSA) with complex-valued vectors; (2) the design of Hierarchical Resonator Networks (HRN) to deal with the non-commutative nature of translation and rotation in visual scenes when both are used in combination; and (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued vector binding on neuromorphic hardware. The VSA framework uses vector binding operations to produce generative image models in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which in turn can be efficiently factorized by a resonator network to infer objects and their poses. The HRN enables the definition of a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition and for rotation and scaling within the other partition. The spiking neuron model allows mapping the resonator network onto efficient and low-power neuromorphic hardware. In this work, we demonstrate our approach using synthetic scenes composed of simple 2D shapes undergoing rigid geometric transformations and color changes. A companion paper demonstrates this approach in real-world application scenarios for machine vision and robotics.
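To make the factorization step concrete, the following is a minimal resonator network over random complex phasor codebooks; the three-factor setup, codebook sizes, and cleanup details are assumptions for illustration rather than the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_codes, n_factors = 2048, 20, 3  # dimension, codewords per factor, factors

def phasors(shape, rng):
    """Random unit-modulus complex vectors (phasor codewords)."""
    return np.exp(1j * rng.uniform(-np.pi, np.pi, size=shape))

# One codebook per factor (e.g., shape identity, horizontal and vertical pose).
codebooks = [phasors((n_codes, d), rng) for _ in range(n_factors)]

# Compose a scene component: elementwise product (binding) of one codeword
# per factor, as in s = shape * pose_h * pose_v.
truth = rng.integers(n_codes, size=n_factors)
s = np.prod([cb[i] for cb, i in zip(codebooks, truth)], axis=0)

# Resonator iteration: unbind the current estimates of the other factors,
# then clean up against the codebook and project back to unit modulus.
est = [cb.mean(axis=0) for cb in codebooks]  # start from codebook superposition
for _ in range(100):
    for f in range(n_factors):
        others = np.prod([est[g] for g in range(n_factors) if g != f], axis=0)
        guess = s * np.conj(others)          # unbinding = conjugate multiply
        alpha = codebooks[f].conj() @ guess  # similarity to each codeword
        raw = codebooks[f].T @ alpha         # weighted codeword superposition
        est[f] = raw / np.abs(raw)           # phasor projection (cleanup)

decoded = [int(np.argmax(np.real(cb.conj() @ e))) for cb, e in zip(codebooks, est)]
print(decoded, list(truth))  # decoded indices should match the ground truth
```

The search is efficient because each factor is estimated in superposition rather than by enumerating all codeword combinations.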
Enhancing gamma ray detection and imaging characteristics of Compact Compton Imager (CCI) employing signal decomposition algorithms
The work presented in this thesis reflects a small piece in the complex mosaic of radiation detector developments. Advanced concepts in signal processing are being studied to enhance the efficiency and spatial resolution of gamma-ray imaging systems. The ultimate goal of the work presented here is to use signal processing, specifically signal decomposition (SD) algorithms, to achieve better imaging efficiency and angular resolution in the gamma-ray imaging system.

Position-sensitive semiconductor detectors (PSD) are important tools for gamma-ray detection and imaging. Substantial progress has been made in the development of these detectors, such as the demonstration of 3D position readout and of Compton imaging applications. One approach to position-sensitive readout is the use of high-purity germanium Double-Sided Segmented Detectors (DSSD).

Compton imaging is now an established gamma-ray imaging modality for energies ranging from about 200 keV to several MeV. Fundamentally, the performance of a Compton imager is limited by Doppler broadening. However, the current performance of Compact Compton Imaging systems is also limited by intrinsic detector properties such as position and energy resolution and the ability to resolve individual interactions.

We have developed and benchmarked signal processing techniques to improve the position resolution to be significantly better than that given by the voxel size. Specifically, we have developed Signal Decomposition (SD) algorithms, which are based on physics models of the charge creation and transport processes and on mathematical techniques such as singular value decomposition, to infer the energy and three-dimensional position of individual gamma-ray interactions.

The experimental results presented in this dissertation demonstrate the performance of a Compton camera system in the second generation of the Compact Compton Imager (CCI2) configuration. This system utilizes two planar double-sided segmented HPGe detectors with a segment size of 2 mm. Using SD, we were able to achieve a spatial resolution of about 0.5 mm, resulting in about 800k spatial voxels across both HPGe double-sided strip detectors (DSSD), which are characterized by an area of about 60 cm² and a thickness of 1.5 cm and are read out by only 76 individual, discrete preamplifiers. The increase in granularity significantly improves the imaging resolution and efficiency, which is the ultimate goal.

The best performance of the CCI2 system to date with respect to angular resolution had been achieved by the SPEctroscopic Imager for gamma-Rays (SPEIR) imaging system. Compared to the SPEIR system, the angular resolution obtained with the SD algorithm is improved by several degrees, and a factor-of-two increase in imaging efficiency has been demonstrated. Applying SD to the best-quality events (single-pixel events in both detectors), an angular resolution of 3 degrees was achieved, with a corresponding imaging efficiency of 0.3%.

In order to benchmark the performance of the CCI2 system with the SD algorithm, GEANT4 simulations were performed. According to the simulations, an angular resolution of 3 degrees corresponds to a position resolution of 0.75 mm in all three dimensions. Significant benefits can thus be attributed to the signal decomposition algorithms. However, there is room for improvement in the algorithm itself in order to achieve an angular resolution of 2.5 degrees, which would be the ultimate goal for the CCI2 system.
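As a loose illustration of the SVD-based decomposition idea, not the thesis's actual charge-transport models, a least-squares fit of a measured waveform onto a basis of per-position signal templates might look like this; all shapes and values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_positions = 200, 50  # waveform length, candidate sub-voxel positions

# Basis of simulated detector signals, one template per candidate interaction
# position (stand-in for physics-based charge-transport simulations).
basis = rng.normal(size=(n_samples, n_positions))

# A measured signal: amplitude (deposited energy) times the template of the
# true position, plus electronic noise.
true_pos, true_amp = 17, 662.0  # e.g., a Cs-137 line, arbitrary units
measured = true_amp * basis[:, true_pos] + rng.normal(scale=5.0, size=n_samples)

# SVD-based least-squares fit: amplitudes = pinv(basis) @ measured.
U, sval, Vt = np.linalg.svd(basis, full_matrices=False)
amplitudes = Vt.T @ ((U.T @ measured) / sval)

best = int(np.argmax(np.abs(amplitudes)))
print(best, amplitudes[best])  # recovers the interaction position and energy
```

In a real detector the templates are strongly correlated rather than random, which is why regularizing the small singular values matters in practice.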
Bioinspired smooth neuromorphic control for robotic arms
Beyond providing accurate movements, achieving smooth motion trajectories is a long-standing goal of robot arm control that aims to replicate natural human movements. Drawing inspiration from biological agents, whose reaching control networks effortlessly give rise to smooth and precise movements, can simplify these control objectives for robot arms. Neuromorphic processors, which mimic the brain's computational principles, are an ideal platform for approximating the accuracy and smoothness of biological controllers while maximizing energy efficiency and robustness. However, the incompatibility of conventional control methods with neuromorphic hardware limits the computational efficiency and explainability of their existing adaptations. In contrast, the neuronal subnetworks underlying smooth and accurate reaching movements are effective, minimal, and inherently compatible with neuromorphic hardware. In this work, we emulate these networks with a biologically realistic spiking neural network for motor control on neuromorphic hardware. The proposed controller incorporates experimentally identified short-term synaptic plasticity and specialized neurons that regulate sensory feedback gain to provide smooth and accurate joint control across a wide motion range. Concurrently, it preserves the minimal complexity of its biological counterpart and is directly deployable on Intel's neuromorphic processor. Using the joint controller as a building block, and inspired by joint coordination in human arms, we scaled up this approach to control real-world robot arms. The trajectories and smooth, bell-shaped velocity profiles of the resulting motions resembled those of humans, verifying the biological relevance of the controller. Notably, the method achieved state-of-the-art control performance while decreasing motion jerk by 19%, improving motion smoothness. Overall, this work suggests that control solutions inspired by experimentally identified neuronal architectures can provide effective neuromorphic control for robots.
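The bell-shaped velocity profiles referred to above are the signature of minimum-jerk reaching; the standard minimum-jerk polynomial below illustrates that target behavior (it describes the desired kinematics, not the spiking controller itself):

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=500):
    """Minimum-jerk point-to-point profile: position and velocity over [0, T].

    x(t) = x0 + (xf - x0) * (10 tau^3 - 15 tau^4 + 6 tau^5),  tau = t / T.
    Its velocity is the smooth, bell-shaped curve typical of human reaching.
    """
    t = np.linspace(0.0, T, n)
    tau = t / T
    pos = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    vel = (xf - x0) / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    return t, pos, vel

t, pos, vel = minimum_jerk(x0=0.0, xf=1.0, T=2.0)
print(vel.max())        # peak speed = 1.875 * (xf - x0) / T, reached at t = T/2
print(vel[0], vel[-1])  # zero velocity at both endpoints
```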
Neuromorphic Visual Scene Understanding with Resonator Networks (in brief)
Inferring the position of objects and their rigid transformations is still an open problem in visual scene understanding. Here we propose a neuromorphic framework that poses scene understanding as a factorization problem and uses a resonator network to extract object identities and their transformations. The framework uses vector binding operations to produce generative image models in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which in turn can be efficiently factorized by a resonator network to infer objects and their poses. We also describe a hierarchical resonator network that enables the definition of a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition, and for rotation and scaling within the other partition. We demonstrate our approach using synthetic scenes composed of simple 2D shapes undergoing rigid geometric transformations and color changes.
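A toy version of this generative "sum of vector products" scene encoding, again assuming complex phasor vectors (the factorization side is sketched after the full-length abstract above):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 1024

def phasor(d, rng):
    return np.exp(1j * rng.uniform(-np.pi, np.pi, size=d))

# Codeword dictionaries for object identity and (coarse) pose.
shapes = {name: phasor(d, rng) for name in ("circle", "square", "triangle")}
poses = {xy: phasor(d, rng) for xy in ((0, 0), (1, 0), (0, 1), (1, 1))}

# Generative scene model: each object contributes shape * pose (binding),
# and the scene vector is the sum of these bound products.
scene = shapes["circle"] * poses[(1, 0)] + shapes["square"] * poses[(0, 1)]

# Query: unbind a pose and ask which shape sits there.
probe = scene * np.conj(poses[(1, 0)])
sims = {name: np.real(np.vdot(v, probe)) / d for name, v in shapes.items()}
print(max(sims, key=sims.get))  # -> "circle"
```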
Video_1_Adaptive control of a wheelchair mounted robotic arm with neuromorphically integrated velocity readings and online-learning.MOV
Wheelchair-mounted robotic arms support people with upper extremity disabilities in various activities of daily living (ADL). However, the cost and power consumption of responsive, adaptive assistive robotic arms contribute to the fact that such systems are in limited use. Neuromorphic spiking neural networks can be used for real-time, machine learning-driven control of robots, providing an energy-efficient framework for adaptive control. In this work, we demonstrate neuromorphic adaptive control of a wheelchair-mounted robotic arm deployed on Intel's Loihi chip. Our algorithm design uses neuromorphically represented and integrated velocity readings to derive the arm's current state. The proposed controller provides the robotic arm with adaptive signals, guiding its motion while accounting for kinematic changes in real time. We pilot-tested the device with an able-bodied participant to evaluate its accuracy while performing ADL-related trajectories. We further demonstrated the capacity of the controller to compensate for unexpected inertia-generating payloads using online learning. Videotaped recordings of ADL tasks performed by the robot were viewed by caregivers; data summarizing their feedback on the user experience and the potential benefit of the system are reported.
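As a plain-NumPy caricature of the control idea (velocity integration for state estimation plus an online-learned compensation term), not the Loihi implementation itself; gains, time step, and the toy plant are assumptions:

```python
dt = 0.01          # control-loop time step in seconds
kp, lr = 2.0, 0.5  # proportional gain, adaptation rate for the learned bias

target, payload_drag = 1.0, -0.3  # setpoint and a constant unmodeled drag
state, bias, command = 0.0, 0.0, 0.0

for _ in range(3000):
    vel = command + payload_drag   # toy plant: the payload offsets the velocity
    state += vel * dt              # integrate velocity readings -> current state
    error = target - state
    bias += lr * error * dt        # online learning of the compensation term
    command = kp * error + bias    # adaptive command sent to the arm

print(round(state, 3), round(bias, 3))  # state -> 1.0, bias -> ~0.3 (cancels drag)
```

The learned bias converges to the negative of the unmodeled drag, which is the same role online learning plays when the real arm picks up an unexpected payload.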